52 research outputs found

    HAM: Cross-cutting Concerns in Eclipse

    As programs evolve, newly added functionality sometimes no longer aligns with the original design and ends up scattered across the software system. Aspect mining tries to identify such cross-cutting concerns in a program to support maintenance, or as a first step towards an aspect-oriented program. Previous approaches to aspect mining applied static or dynamic program analysis techniques to a single version of a system. We leverage all versions from a system's CVS history to mine aspect candidates with our Eclipse plug-in HAM: when a single CVS commit adds calls to the same (small) set of methods in many unrelated locations, these method calls are likely to be cross-cutting. HAM employs formal concept analysis to identify aspect candidates. Analysing one commit at a time makes the approach scale to industrial-sized programs. In an evaluation we mined cross-cutting concerns from Eclipse 3.2M3 and found that up to 90% of the top-10 aspect candidates are truly cross-cutting concerns.
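The commit-based mining idea can be sketched in a few lines. The code below is a toy illustration, not HAM itself: the commit data and method names are invented, and the concept enumeration is a naive closure over location subsets (feasible only for small examples; HAM uses efficient formal concept analysis per commit).

```python
from itertools import combinations

# Toy "commit": for each changed location, the set of method calls the
# commit added there. All names are invented for illustration.
added_calls = {
    "Parser.parse":     {"Logger.enter", "Logger.exit"},
    "Compiler.compile": {"Logger.enter", "Logger.exit"},
    "Linker.link":      {"Logger.enter", "Logger.exit", "Cache.flush"},
    "Editor.save":      {"Cache.flush"},
}

def concepts(relation):
    """Enumerate formal concepts (extent, intent) of the binary relation
    location x added-method-call, by closing every subset of locations."""
    locs = list(relation)
    found = set()
    for r in range(1, len(locs) + 1):
        for subset in combinations(locs, r):
            intent = set.intersection(*(relation[l] for l in subset))
            extent = frozenset(l for l in locs if intent <= relation[l])
            found.add((extent, frozenset(intent)))
    return found

# Rank aspect candidates: small method sets added in many unrelated
# locations are the most likely cross-cutting concerns.
candidates = sorted(
    (c for c in concepts(added_calls) if c[1]),
    key=lambda c: (-len(c[0]), len(c[1])),
)
for extent, intent in candidates[:3]:
    print(sorted(intent), "added in", len(extent), "locations")
```

On this toy commit, the logging pair `Logger.enter`/`Logger.exit` surfaces as the top candidate because it was added in three unrelated locations at once.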

    Mining Eclipse for CrossCutting


    Mining Additions of Method Calls in ArgoUML

    In this paper we refine the classical notion of co-change to the addition of method calls. We use this concept to find usage patterns and to identify cross-cutting concerns in ArgoUML.

    Mining Object Behavior with ADABU.

    To learn what constitutes correct program behavior, one can start with normal behavior. We observe actual program executions to construct state machines that summarize object behavior. These state machines, called object behavior models, capture the relationships between two kinds of methods: mutators that change the state (such as add()) and inspectors that keep the state unchanged (such as isEmpty()).
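The core idea can be sketched as follows. This is a minimal toy version of object behavior models, not ADABU: the class, the trace, and the function names are invented. The abstract state of an object is the vector of its inspector results; each observed mutator call becomes a transition between abstract states.

```python
# Toy sketch of mining an object behavior model from an execution trace.

def abstract_state(obj, inspectors):
    """Abstract an object into the tuple of its inspector results."""
    return tuple((name, getattr(obj, name)()) for name in inspectors)

def mine_model(obj, trace, inspectors):
    """Replay a trace of mutator calls, recording state transitions."""
    transitions = set()
    state = abstract_state(obj, inspectors)
    for mutator, args in trace:
        getattr(obj, mutator)(*args)
        new_state = abstract_state(obj, inspectors)
        transitions.add((state, mutator, new_state))
        state = new_state
    return transitions

class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):          # mutator
        self._items.append(x)
    def pop(self):              # mutator
        return self._items.pop()
    def isEmpty(self):          # inspector
        return not self._items

model = mine_model(Stack(),
                   [("push", (1,)), ("push", (2,)), ("pop", ()), ("pop", ())],
                   ["isEmpty"])
for src, mutator, dst in sorted(model):
    print(src, "--" + mutator + "-->", dst)
```

For this trace the model collapses to two abstract states, "empty" and "non-empty", with push and pop transitions between them.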

    RNA reference materials with defined viral RNA loads of SARS-CoV-2—A useful tool towards a better PCR assay harmonization

    SARS-CoV-2, the cause of COVID-19, requires reliable diagnostic methods to track the circulation of this virus. Following the development of RT-qPCR methods to meet this diagnostic need in January 2020, it became clear from interlaboratory studies that the reported Ct values obtained for the different laboratories showed high variability. Despite this, the Ct values were explored as a quantitative cut-off to aid clinical decisions based on viral load. Consequently, there was a need to introduce standards to support estimation of SARS-CoV-2 viral load in diagnostic specimens. In a collaborative study, INSTAND established two reference materials (RMs) containing heat-inactivated SARS-CoV-2 with SARS-CoV-2 RNA loads of ~10⁷ copies/mL (RM 1) and ~10⁶ copies/mL (RM 2), respectively. Quantification was performed by RT-qPCR using synthetic SARS-CoV-2 RNA standards and digital PCR. Between November 2020 and February 2021, German laboratories were invited to use the two RMs to anchor the Ct values measured in routine diagnostic specimens to the Ct values of the two RMs. A total of 305 laboratories in Germany were supplied with RM 1 and RM 2. The laboratories were requested to report their measured Ct values together with details on the PCR method they used to INSTAND. The resulting 1,109 data sets were differentiated by test system and targeted gene region. Our findings demonstrate that an indispensable prerequisite for linking Ct values to SARS-CoV-2 viral loads is that they are treated as being unique to an individual laboratory. For this reason, clinical guidance based on viral loads should not cite Ct values. The RMs described were a suitable tool to determine the specific laboratory Ct for a given viral load. Furthermore, as Ct values can also vary between runs when using the same instrument, such RMs could be used as run controls to ensure reproducibility of the quantitative measurements. Peer Reviewed
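The anchoring idea admits a simple back-of-the-envelope form. The sketch below is illustrative only: the Ct values are invented, and the slope assumes an ideal PCR efficiency of 100% (one doubling per cycle, i.e. about 3.32 Ct per factor of 10 in viral load); a real laboratory would calibrate against its own measured RM Ct values and assay efficiency.

```python
import math

# Hypothetical anchoring of a laboratory's Ct scale to reference
# material RM 1 (~1e7 SARS-CoV-2 RNA copies/mL). Numbers are invented.

RM1_LOAD = 1e7            # copies/mL, nominal load of RM 1
CT_SLOPE = math.log2(10)  # ~3.32 cycles per log10 of load at 100% efficiency

def estimate_load(ct_sample, ct_rm1, rm_load=RM1_LOAD):
    """Estimate viral load from a sample Ct, anchored to the Ct this
    laboratory measured for RM 1 with the same assay and instrument."""
    delta_log10 = (ct_rm1 - ct_sample) / CT_SLOPE
    return rm_load * 10 ** delta_log10

# Example: if this (hypothetical) lab measures Ct 20.0 for RM 1, a
# specimen at Ct 26.64 lies roughly 2 log10 below the RM's load.
print(f"{estimate_load(26.64, 20.0):.2e} copies/mL")
```

Because the anchor Ct differs between laboratories and even between runs, the same specimen Ct maps to different loads in different labs, which is exactly why the abstract argues Ct values must be treated as laboratory-specific.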

    Style - A Practical Type Checker for Scheme

    This paper describes a new tool for finding errors in R4RS-compliant Scheme programs. A polymorphic type system in the style of Damas & Milner (1982), with an additional maximum type, is used to type Scheme code. Although Scheme is dynamically typed, most parts of programs are statically typeable; type inconsistencies are regarded as hints to possible programming errors. The paper first introduces a type system which is a careful balance between rigorous type safety and pragmatic type softness. An efficient and portable implementation based on order-sorted unification in Scheme is then described. We obtained very satisfactory results on realistic programs, including the programs in Abelson, Sussman & Sussman (1985).
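The "maximum type" idea of this soft approach can be sketched with a tiny unifier. The representation and names below are mine, not the paper's: inference proceeds by ordinary unification, but a type clash does not reject the program; instead the type widens to a maximum type and the clash is recorded as a hint to a possible programming error.

```python
# Minimal sketch of unification with a maximum type instead of hard failure.

TOP = "top"  # the maximum type: supertype of everything

def resolve(t, subst):
    """Follow substitution chains for type variables (names start with '?')."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst, hints):
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return
    if isinstance(t1, str) and t1.startswith("?"):    # type variable
        subst[t1] = t2
    elif isinstance(t2, str) and t2.startswith("?"):
        subst[t2] = t1
    elif isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):                      # constructed types
            unify(a, b, subst, hints)
    else:
        # Soft typing: don't reject, widen to TOP and keep a hint.
        hints.append(f"type clash: {t1} vs {t2}, widened to {TOP}")

subst, hints = {}, []
# A value used both as a number and as a string clashes softly:
unify("?x", "number", subst, hints)
unify("?x", "string", subst, hints)
print(subst, hints)
```

The hint list is exactly what such a checker reports to the programmer: places where a statically typeable program fragment was forced to the maximum type.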

    Algorithms for Concept Analysis and Their Application to Software Libraries

    Formal concept analysis is an algebraic theory concerning binary relations and closely related complete lattices of so-called concepts. This thesis presents algorithms and data structures to compute concepts and their lattice structure. Since, in the worst case, the number of concepts can grow exponentially with the size of a relation, the complexity of the algorithms was given special attention. The speed of the algorithms as well as the actual size of concept lattices was tested on a large number of test cases. They show that, for practical applications, the performance of the algorithms and the number of concepts depend only quadratically on the size of the relation. As an application of concept analysis, the organization of a library of re-usable software components is proposed. Component-based software reuse aims to raise software quality and development productivity by re-using already successfully developed components. The proposed method combines good maintainability of the component library with strong navigation support for the user. The search tool uses the concept lattice as a data structure that is computed once for the library. The lattice essentially contains all decisions a user can take while searching for a component and thus supports efficient searching. Additionally, the lattice permits reasoning about the quality of the indexing method used on the components.
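The navigation idea can be sketched concretely. The component library and keywords below are invented for illustration: each component is indexed with keywords, a query is a keyword set, and the concept generated by the query pairs the matching components with their common keywords. The refinements offered to the user are exactly the keywords that strictly narrow the current result set, which is the information the concept lattice encodes.

```python
# Sketch of lattice-based search in a component library (toy data).

index = {
    "ArrayList":  {"collection", "ordered", "resizable"},
    "HashSet":    {"collection", "unique"},
    "TreeSet":    {"collection", "unique", "ordered"},
    "ArrayDeque": {"collection", "ordered", "queue"},
}

def node(query):
    """The concept generated by a keyword query: (extent, intent)."""
    extent = {c for c, kws in index.items() if query <= kws}
    intent = set.intersection(*(index[c] for c in extent)) if extent else set()
    return extent, intent

def choices(query):
    """Keywords the search tool offers next: each strictly narrows the result."""
    extent, intent = node(query)
    if not extent:
        return set()
    options = set().union(*(index[c] for c in extent)) - intent
    return {kw for kw in options if node(query | {kw})[0] < extent}

print(node({"ordered"})[0])   # components matching "ordered"
print(choices({"ordered"}))   # refinements offered to the user
```

Precomputing the lattice once for the whole library means each refinement step is a lookup rather than a fresh search, which is the efficiency argument made in the abstract.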